In this work, we propose NARRATE, a novel pipeline that enables simultaneously editing portrait lighting and perspective in a photorealistic manner. As a hybrid neural-physical face model, NARRATE leverages the complementary benefits of geometry-aware generative approaches and normal-assisted physical face models. In a nutshell, NARRATE first inverts the input portrait to a coarse geometry and employs neural rendering to generate images that resemble the input, as well as to produce convincing pose changes. However, the inversion step introduces mismatches, yielding low-quality images with fewer facial details. We therefore further estimate portrait normals to enhance the coarse geometry, creating a high-fidelity physical face model. In particular, we fuse the neural and physical renderings to compensate for the imperfect inversion, resulting in novel-perspective images that are both realistic and view-consistent. In the relighting stage, previous works focus on single-view portrait relighting but ignore consistency across different viewpoints, leading to unstable and inconsistent lighting effects under view changes. We address this problem by unifying the multi-view input normal maps with the physical face model. NARRATE conducts relighting with consistent normal maps, imposing cross-view constraints and exhibiting stable and coherent illumination effects. We experimentally demonstrate that NARRATE achieves more photorealistic and reliable results than prior works. We further bridge NARRATE with animation and style-transfer tools, supporting pose changes, lighting changes, facial animation, and style transfer, either separately or in combination, all at photographic quality. We showcase vivid free-view facial animation as well as 3D-aware relightable stylization, which help facilitate various AR/VR applications, such as virtual cinematography, 3D video conferencing, and post-production.
The learned iterative shrinkage-thresholding algorithm (LISTA) introduces deep unfolding models with learnable thresholds in the shrinkage functions for sparse coding. Drawing on theoretical insights, we advocate an error-based thresholding (EBT) mechanism for LISTA, which leverages a function of the layer-wise reconstruction error to suggest an appropriate threshold for each observation at each layer. We show that the EBT mechanism well disentangles the learnable parameters in the shrinkage functions from the reconstruction errors, making them more adaptive to various observations. Through rigorous theoretical analysis, we show that, besides its higher adaptivity, the proposed EBT also achieves faster convergence on the basis of LISTA and its variants. Extensive experimental results confirm our theoretical analyses and verify the effectiveness of our method.
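To make the EBT idea concrete, here is a minimal, illustrative sketch (not the paper's implementation): an ISTA-style loop whose per-layer soft threshold is a function of the layer-wise reconstruction error. The fixed scalars `step` and `alpha` stand in for what would be learnable parameters in LISTA/EBT.

```python
def soft_threshold(v, theta):
    # elementwise soft-thresholding (shrinkage) operator
    return [max(abs(x) - theta, 0.0) * (1.0 if x >= 0 else -1.0) for x in v]

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def ista_ebt(A, y, num_layers=20, step=0.5, alpha=0.1):
    """ISTA-style iterations in which each layer's threshold is a function
    of the layer-wise reconstruction error ||A x - y||, as in EBT."""
    x = [0.0] * len(A[0])
    At = list(map(list, zip(*A)))  # transpose of A
    for _ in range(num_layers):
        residual = [ri - yi for ri, yi in zip(matvec(A, x), y)]
        err = sum(r * r for r in residual) ** 0.5
        x = [xi - step * g for xi, g in zip(x, matvec(At, residual))]
        x = soft_threshold(x, alpha * err)  # error-adaptive threshold
    return x

# toy demo: identity dictionary, so the recovered code should approach y
x_hat = ista_ebt([[1.0, 0.0], [0.0, 1.0]], [1.0, 0.0])
```

Because the threshold shrinks together with the reconstruction error, the shrinkage vanishes as the iterate approaches a solution, which is the intuition behind EBT's faster convergence.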
Data augmentation is an important component of robustness evaluation for natural language processing (NLP) models, as well as of enhancing the diversity of the data they are trained on. In this paper, we present NL-Augmenter, a new participatory Python-based natural language augmentation framework that supports the creation of both transformations (modifications to the data) and filters (data splits according to specific features). We describe the framework and an initial set of 117 transformations and 23 filters for a variety of natural language tasks. We demonstrate the efficacy of NL-Augmenter by using several of its transformations to analyze the robustness of popular natural language models. The infrastructure, datacards, and robustness analysis results are publicly available on the NL-Augmenter repository (\url{https://github.com/gem-benchmark/nl-augmenter}).
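The transformation/filter split described above can be illustrated with a toy sketch. The classes below mirror the two concepts only; they are not NL-Augmenter's actual interfaces, and the example operations are invented for demonstration.

```python
class Transformation:
    """A modification of the input text (used for augmentation)."""
    def generate(self, text):
        raise NotImplementedError

class Filter:
    """A predicate used to split data according to a specific feature."""
    def keep(self, text):
        raise NotImplementedError

class Uppercase(Transformation):
    # toy transformation: returns a list of augmented variants
    def generate(self, text):
        return [text.upper()]

class ShortSentences(Filter):
    # toy filter: keep only sentences of at most five words
    def keep(self, text):
        return len(text.split()) <= 5

data = ["a tiny example", "this sentence is much longer than five words"]
augmented = [out for t in data for out in Uppercase().generate(t)]
short_split = [t for t in data if ShortSentences().keep(t)]
```

A transformation expands the dataset, while a filter carves out a targeted evaluation slice; robustness is then measured by comparing model behavior across the original, augmented, and filtered sets.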
To facilitate research on text generation, this paper presents a comprehensive and unified library, TextBox 2.0, focusing on the use of pre-trained language models (PLMs). To be comprehensive, our library covers $13$ common text generation tasks and their corresponding $83$ datasets and further incorporates $45$ PLMs covering general, translation, Chinese, dialogue, controllable, distilled, prompting, and lightweight PLMs. We also implement $4$ efficient training strategies and provide $4$ generation objectives for pre-training new PLMs from scratch. To be unified, we design the interfaces to support the entire research pipeline (from data loading to training and evaluation), ensuring that each step can be fulfilled in a unified way. Despite the rich functionality, it is easy to use our library, either through the friendly Python API or command line. To validate the effectiveness of our library, we conduct extensive experiments and exemplify four types of research scenarios. The project is released at the link: https://github.com/RUCAIBox/TextBox.
Establishing open and general benchmarks has been a critical driving force behind the success of modern machine learning techniques. As machine learning is being applied to broader domains and tasks, there is a need to establish richer and more diverse benchmarks to better reflect the reality of the application scenarios. Graph learning is an emerging field of machine learning that urgently needs more and better benchmarks. To accommodate the need, we introduce Graph Learning Indexer (GLI), a benchmark curation platform for graph learning. In comparison to existing graph learning benchmark libraries, GLI highlights two novel design objectives. First, GLI is designed to incentivize \emph{dataset contributors}. In particular, we incorporate various measures to minimize the effort of contributing and maintaining a dataset, increase the usability of the contributed dataset, as well as encourage attributions to different contributors of the dataset. Second, GLI is designed to curate a knowledge base, instead of a plain collection, of benchmark datasets. We use multiple sources of meta information to augment the benchmark datasets with \emph{rich characteristics}, so that they can be easily selected and used in downstream research or development. The source code of GLI is available at \url{https://github.com/Graph-Learning-Benchmarks/gli}.
Neural networks, especially the recently proposed neural operator models, are increasingly being used to find the solution operator of differential equations. Compared to traditional numerical solvers, they are much faster and more efficient in practical applications. However, one critical issue is that training neural operator models requires large amounts of ground truth data, which usually come from slow numerical solvers. In this paper, we propose a physics-guided data augmentation (PGDA) method to improve the accuracy and generalization of neural operator models. Training data is augmented naturally through the physical properties of differential equations such as linearity and translation. We demonstrate the advantage of PGDA on a variety of linear differential equations, showing that PGDA can improve the sample complexity and is robust to distributional shift.
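The two properties named above admit a simple generic sketch (not PGDA's actual code): if `L u = f` for a linear operator `L`, then linear combinations of known solution pairs, and, for shift-invariant `L` with periodic boundaries, translations of them, yield new valid training pairs with no additional solver calls.

```python
def augment_by_linearity(pair_a, pair_b, alpha, beta):
    """If L(u1) = f1 and L(u2) = f2 with L linear, then
    L(alpha*u1 + beta*u2) = alpha*f1 + beta*f2: a free new sample."""
    (f1, u1), (f2, u2) = pair_a, pair_b
    f_new = [alpha * a + beta * b for a, b in zip(f1, f2)]
    u_new = [alpha * a + beta * b for a, b in zip(u1, u2)]
    return f_new, u_new

def augment_by_translation(pair, shift):
    """Periodic translation: for a shift-invariant L on a periodic grid,
    shifting (f, u) by the same offset gives another solution pair."""
    f, u = pair
    return f[shift:] + f[:shift], u[shift:] + u[:shift]

# toy demo with L = identity, so each pair is simply (f, f)
f_new, u_new = augment_by_linearity(
    ([1.0, 0.0], [1.0, 0.0]), ([0.0, 1.0], [0.0, 1.0]), 2.0, 1.0)
f_t, u_t = augment_by_translation(([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]), 1)
```

In practice the pairs would be discretized forcing terms and solutions from a numerical solver, and the augmented pairs enlarge the training set of the neural operator at essentially zero cost.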
Accurate polyp segmentation is of great importance for colorectal cancer diagnosis and treatment. However, due to the high cost of producing accurate mask annotations, existing polyp segmentation methods suffer from severe data shortage and impaired model generalization. Conversely, coarse polyp bounding-box annotations are more accessible. Thus, in this paper, we propose a boosted BoxPolyp model to make full use of both accurate mask and extra coarse box annotations. In practice, box annotations are applied to alleviate the over-fitting issue of previous polyp segmentation models, which generate fine-grained polyp areas through an iteratively boosted segmentation model. To achieve this goal, a fusion filter sampling (FFS) module is first proposed to generate pixel-wise pseudo labels from box annotations with less noise, leading to significant performance improvements. Besides, considering the appearance consistency of the same polyp, an image consistency (IC) loss is designed. Such an IC loss explicitly narrows the distance between features extracted by two different networks, which improves the robustness of the model. Note that our BoxPolyp is a plug-and-play model, which can be merged into any appealing backbone. Quantitative and qualitative experimental results on five challenging benchmarks confirm that our proposed model outperforms previous state-of-the-art methods by a large margin.
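The IC loss, as described, penalizes the distance between features of the same polyp extracted by two different networks. A minimal sketch of that term (a mean-squared distance over placeholder feature vectors; the paper's exact formulation may differ):

```python
def ic_loss(feat_a, feat_b):
    """Image-consistency-style loss: mean squared distance between the
    feature vectors produced by two different networks for the same image.
    feat_a and feat_b are placeholder flat feature lists of equal length."""
    assert len(feat_a) == len(feat_b)
    n = len(feat_a)
    return sum((a - b) ** 2 for a, b in zip(feat_a, feat_b)) / n

# identical features incur no penalty; divergent features are penalized
loss_same = ic_loss([1.0, 2.0], [1.0, 2.0])
loss_diff = ic_loss([0.0, 0.0], [2.0, 0.0])
```

Driving this term to zero pushes the two networks toward agreeing on the same polyp's representation, which is the stated source of the robustness gain.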
Prompt tuning has been employed as an efficient way to adapt large vision-language pre-trained models (e.g. CLIP) to various downstream tasks in data-limited or label-limited settings. Nonetheless, visual data (e.g., images) is by default a prerequisite for learning prompts in existing methods. In this work, we advocate that the effectiveness of image-text contrastive learning in aligning the two modalities (for training CLIP) further makes it feasible to treat texts as images for prompt tuning, and introduce TaI prompting. In contrast to visual data, text descriptions are easy to collect, and their class labels can be directly derived. Particularly, we apply TaI prompting to multi-label image recognition, where sentences in the wild serve as alternatives to images for prompt tuning. Moreover, with TaI, double-grained prompt tuning (TaI-DPT) is further presented to extract both coarse-grained and fine-grained embeddings for enhancing the multi-label recognition performance. Experimental results show that our proposed TaI-DPT outperforms zero-shot CLIP by a large margin on multiple benchmarks, e.g., MS-COCO, VOC2007, and NUS-WIDE, while it can be combined with existing methods of prompting from images to improve recognition performance further. Code is released at https://github.com/guozix/TaI-DPT.
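One claim above, that class labels can be directly derived from text descriptions, is easy to illustrate with a toy sketch (invented class list and rule; not the authors' labeling procedure, which handles far more than exact word matches):

```python
# hypothetical label vocabulary for a multi-label recognition task
CLASSES = ["dog", "cat", "car"]

def derive_labels(sentence):
    """Derive a multi-label target vector directly from a text description:
    a class is positive if its name appears in the sentence."""
    words = set(sentence.lower().split())
    return [1 if c in words else 0 for c in CLASSES]

labels = derive_labels("a dog chases a car")
```

Because CLIP places text and image embeddings in a shared space, prompts tuned against such cheaply labeled sentences can then be applied to image embeddings at test time, which is the core of TaI prompting.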
Artificial Intelligence (AI) is having a tremendous impact across most areas of science. Applications of AI in healthcare have the potential to improve our ability to detect, diagnose, prognose, and intervene on human disease. For AI models to be used clinically, they need to be made safe, reproducible and robust, and the underlying software framework must be aware of the particularities (e.g. geometry, physiology, physics) of medical data being processed. This work introduces MONAI, a freely available, community-supported, and consortium-led PyTorch-based framework for deep learning in healthcare. MONAI extends PyTorch to support medical data, with a particular focus on imaging, and provides purpose-specific AI model architectures, transformations and utilities that streamline the development and deployment of medical AI models. MONAI follows best practices for software development, providing an easy-to-use, robust, well-documented, and well-tested software framework. MONAI preserves the simple, additive, and compositional approach of its underlying PyTorch libraries. MONAI is being used by and receiving contributions from research, clinical and industrial teams from around the world, who are pursuing applications spanning nearly every aspect of healthcare.
Feature selection has attracted much attention over the past decades because it can reduce data dimensionality while maintaining the original physical meaning of features, offering better interpretability than feature extraction. However, most existing feature selection approaches, especially deep-learning-based ones, often focus only on features with high importance scores while neglecting those with lower scores during training, as well as the order of important candidate features. This can be risky, since some important and relevant features may unfortunately be ignored during training, leading to suboptimal solutions or misleading selections. In our work, we handle feature selection by additionally exploiting features with lower importance scores and propose a feature selection framework based on a novel complementary feature mask. Our method is generic and can be easily integrated into existing deep-learning-based feature selection approaches to improve their performance. Experiments conducted on benchmark datasets show that the proposed method can select more representative and informative features than the state of the art.
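The masking idea can be sketched in a few lines (an illustrative construction only, not the paper's framework): alongside the usual top-k mask over importance scores, keep the complementary mask over the remaining features, so lower-scored candidates stay visible to training rather than being discarded outright.

```python
def complementary_masks(scores, k):
    """Return a binary top-k mask over importance scores together with its
    complement, which covers the lower-scored candidate features."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    top = set(order[:k])
    primary = [1 if i in top else 0 for i in range(len(scores))]
    complementary = [1 - m for m in primary]
    return primary, complementary

# toy importance scores for four features; keep the top two
primary, comp = complementary_masks([0.9, 0.1, 0.5, 0.3], k=2)
```

A training scheme in this spirit would feed both masked views of the input to the model, so that evidence from the complementary (lower-scored) features can still revise the selection before it is finalized.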